The Wiley Blackwell Handbook of Judgment and Decision Making

Authors

  • Daniel M. Bartels
  • Christopher W. Bauman
  • Fiery A. Cushman
  • David A. Pizarro
  • A. Peter McGraw
Abstract

[…]ed from actual and possible preferences. Welfare interests consist in just that set of generalized resources that will be necessary for people to have before pursuing any of the more particular preferences that they might happen to have. Health, money, shelter, sustenance, and such like are all demonstrably welfare interests of this sort, useful resources whatever people's particular projects and plans. (Goodin, 1993, p. 242)

Deontology is the view that the moral status of an action should be evaluated based on qualities of the action, independent of its consequences. Actions are intrinsically wrong if they violate moral rules, such as those that specify rights, duties, and obligations. While deontology acknowledges that the consequences of an act are relevant for determining its moral status, considerations of moral rules outweigh considerations of the goodness of consequences (Kagan, 1998). In many contexts, consequentialism and deontology yield the same judgments for harmful acts; doing harm leads to worse consequences, other things being equal. But strict consequentialism treats prohibitions of harmful acts as akin to rules of thumb that must be broken in cases where doing so would produce better consequences. And, conversely, an important characteristic of deontological judgments is that they are consequence-insensitive.

Importantly, consequentialism and deontology do not exhaust the full range of moral considerations identified by theories of normative ethics. Virtue ethics is a view that focuses on moral character and dispositions or traits that promote human flourishing. Virtues do not involve specific actions (e.g., telling the truth) as much as they represent a person's longstanding practices that are consistent with an ideal (e.g., being an honest person). Although this perspective has received much less attention from moral psychologists, a growing body of research suggests that people make moral judgments that are consistent with virtue ethics (e.g., Critcher, Inbar, & Pizarro, 2012; Goodwin, Piazza, & Rozin, 2014; Tannenbaum, Uhlmann, & Diermeier, 2011; see Pizarro & Tannenbaum, 2011, for a review). We will return to this point when discussing understudied aspects of moral judgment.

Protected values. Choice models in behavioral decision theory typically assume that people seek to obtain desirable outcomes. This supposition is central to both normative models (that describe how decision makers should choose; Savage, 1954; von Neumann & Morgenstern, 1947) and descriptive models (that describe how people actually choose; Kahneman & Tversky, 1979; Tversky & Kahneman, 1992). However, research also shows that rules play a critical role in decision making (e.g., Amir & Ariely, 2007; March & Heath, 1994). Moral choices represent a useful context to investigate the conflict between rule-based and consequentialist decision strategies because moral choices are sometimes driven more by ideas about how sacred entities are to be treated ("companies should not be allowed to buy the right to pollute the earth") than by the direct consequences associated with the action ("even if pollution credits reduce pollution"). In other words, protected values are associated with deontological rules (e.g., "do no harm"; Baron, 1996) and not the overall consequences of those actions (Baron & Ritov, 2009). A standard interpretation of such preferences is that protected values motivate rigid rule-based decision processes that ignore outcomes.
The protected values framework describes morally motivated choice as constrained by an absolutely restrictive set of trade-off rules (Baron & Spranca, 1997). Protected values are exempt from trade-offs with other values; in theory, they cannot be traded off or violated for any reason, no matter the consequences. As such, they are typically measured by presenting respondents with statements concerning the acceptability of trade-offs for some resource and asking them to evaluate the moral status of those trade-offs. For example, Ritov and Baron (1999) asked participants to respond to a potential trade-off and classified people who responded "c" to the item below as having a protected value for fish species.

Causing the extinction of fish species:
a) I do not object to this.
b) This is acceptable if it leads to some sort of benefits (money or something else) that are great enough.
c) This is not acceptable no matter how great the benefits.

People who express a protected value for a given issue are more likely to exhibit "quantity insensitivity." That is, decision makers with protected values relevant to a particular decision may disregard outcomes entirely and view a small violation of a protected value as being equally wrong as larger violations (e.g., Baron & Spranca, 1997). For example, Ritov and Baron (1999) presented participants with a scenario where the only way to save 20 species of fish upstream from a dam on a river would be to open the dam, but opening the dam would kill two species of fish downstream. Participants were then asked (a) whether they would open the dam in this situation, and (b) to identify the maximum number of fish species they would allow to be killed downstream and still decide to open the dam. Participants who had a protected value about fish extinctions (based on the criterion above) were less likely to choose to open the dam (if doing so would kill two species) and more likely to indicate they were unwilling to open the dam if doing so would cause the loss of even a single species, even though not opening the dam would lead to the loss of 20 species.

The link between nonconsequentialism and protected values may not be clear at first blush. Presumably, people care deeply about the entities about which they have a protected value (e.g., family, endangered species). So, one would expect people to be sensitive to the consequences that befall these protected entities. However, if protected values motivate nonconsequentialism, then people who really care about an issue will fail to maximize the goodness of outcomes for these entities, and they might even appear comparatively ignorant, insofar as they might not be taking stock of the consequences at all. These restrictive trade-off rules, then, present a major problem that could undercut welfare maximization from a utilitarian policy-making perspective (Baron & Spranca, 1997).

Viewing the results of research on protected values through the lens of moral flexibility suggests that a nonconsequentialist interpretation considers only part of a larger story. Moral decision makers sometimes affirm their protected values by judging a particular action to be wrong, even in the face of undesirable consequences.
However, people with protected values are also capable of identifying situations where the benefits would justify trade-offs (Baron & Leshner, 2000), and the relationship between protected values and evaluations of moral actions is strongly determined by attentional processes (Bartels, 2008; Iliev et al., 2009; Sachdeva & Medin, 2008). For example, situations that direct people's attention to the consequences of their choices (and away from the actions that bring them about, like asking them whether they would make the trade-off under varying sets of circumstances) make people with protected values more willing to endorse trade-offs than those without protected values (i.e., the opposite of quantity insensitivity; Bartels & Medin, 2007). In short, it appears that features of the situation play a key role in determining whether and how much people base their choices on rules or consequences. People with protected values sometimes appear to be deontologists who strictly follow rules, and they sometimes appear to be utilitarians who ardently pursue the best consequences.

Sacred values and taboo trade-offs. As noted above, people sometimes reason and choose nonconsequentially, such as when they are contemplating the extinction of an endangered species. Although the "protected values" framework (e.g., Baron & Spranca, 1997) addresses these situations, so does a seemingly parallel but distinct literature on "sacred values." Whereas the literature on protected values has largely focused on the problems that absolutely restrictive trade-off rules pose for seeking utilitarian ends (e.g., crafting optimal policy) and on the cognitive and affective correlates of having protected values (Baron & Spranca, 1997), the literature on sacred values presents a framework, largely derived from sociology, for understanding where such rules might come from (A. P. Fiske & Tetlock, 1997) and for understanding how people manage to navigate flexibly through a world that forces them to make value trade-offs (Tetlock, 2002).

The sacred-values framework primarily examines exchanges in which decisions are determined by the moral significance attached to the things being exchanged. Certain cherished goods, like human life, health, and nature, are treated by people in some communities as having intrinsic moral or transcendental value. In all but the most extreme circumstances, these sacred values are not to be exchanged for secular values, especially goods that can be bought or sold. For example, selling one's vote or paying someone else to take one's place in a military draft seems morally abhorrent to many people. The sacred-values framework explains these instances of nonconsequentialist judgment as the result of a person having internalized a set of culturally defined norms that constrain the manner in which different types of goods can be permissibly exchanged for each other.

Most research on sacred values focuses on restrictive trade-off rules, suggesting that strongly held, situation-specific values engender deontological decision strategies. For example, people often have strong reservations about market exchanges for sex, organs, and adoption (Sondak & Tyler, 2001; Tetlock et al., 2000). People have similar reservations about religious organizations (i.e., a sacred entity) relying on commercial marketing strategies (i.e., a secular solution) to recruit and retain their members (McGraw, Schwartz, & Tetlock, 2012). These effects extend outside the laboratory.
In a field study, Palestinians and Israeli settlers facing a (hypothetical) trade-off involving a sacred value (e.g., returning land, recognizing a Palestinian state) reacted with greater outrage when the peace deal was "sweetened" with a monetary offer (e.g., receiving $1 billion a year for 100 years from the United States; Ginges, Atran, Medin, & Shikaki, 2007; see also Dehghani et al., 2009, 2010).

Sacred–secular exchanges are judged to be morally reprehensible (Tetlock et al., 2000). Tetlock's (2002) framework suggests that when facing a "taboo trade-off" – a sacred-for-secular trade-off, in which only one of two or more resources is treated as morally significant – decision makers view utilitarian considerations (i.e., the costs and benefits of alternative courses of action) as off-limits. When posed with a taboo trade-off, people instead adhere to deontological constraints, affirming their culture's proscriptive moral rules.

People want to avoid taboo trade-offs for interpersonal and intrapersonal reasons. Avoiding taboo trade-offs means avoiding the negative judgments made by others (Tetlock, Peterson, & Lerner, 1996); even knowing that someone merely contemplated such a trade-off is aversive, eliciting contempt, disgust, and the judgment that such contemplation is unforgivable (Tetlock et al., 2000). Also, a decision maker facing a potentially taboo trade-off experiences negative emotions (McGraw & Tetlock, 2005; Tetlock et al., 2000), which they avoid by abandoning trade-off reasoning (McGraw, Tetlock, & Kristel, 2003). For instance, contemplating secular–sacred trade-offs, such as whether to save money on an apartment or a vehicle by accepting lower levels of safety, leads consumers to abandon trade-off based reasoning to avoid negative emotions (Luce, Payne, & Bettman, 1999, 2000).

According to the sacred-values framework, not all morally significant exchanges are impermissible. In situations where only sacred values are at issue – "tragic trade-offs" – people believe it is okay, perhaps even a good idea, to weigh utilitarian considerations. For example, people are not outraged when they learn about a hospital administrator agonizing over a decision about which of two dying children should be given the one life-saving organ, regardless of the ultimate choice (Tetlock et al., 2000).

The permissibility of trade-offs goes even further when one takes into account research investigating how simple rhetorical messages can reframe a taboo trade-off as an acceptable, even routine, trade-off (McGraw & Tetlock, 2005; McGraw, Schwartz, & Tetlock, 2012). For instance, the public was outraged when it learned that the Clinton administration had provided major campaign contributors with a night's stay in the Lincoln bedroom. That outrage was mitigated when a reciprocity norm was invoked to explain the situation – "friends doing favors for friends" (McGraw & Tetlock, 2005) – and this mitigation was especially pronounced among the people – Bill Clinton supporters – who were most motivated to avoid the outrage associated with the idea that rooms in the White House were up for sale as they would be in a hotel.

What underlies the moral flexibility demonstrated in the literature on sacred values? Evidence suggests that people understand that moral concepts like harm, equality, or purity can have different meanings depending on the type of social relationship a given situation involves (Rai & Fiske, 2011).
For example, even a slight change in framing a policy decision as military or as diplomatic has a profound effect on how people view potential responses to a hostage situation (Ginges & Atran, 2011). According to this view, people's flexibility in applying moral values across situations is a consequence of how they interpret social situations and implement a finite set of schemata about the nature of the relationships therein. Specifically, social relationships can be grouped into four basic models: communal sharing, authority ranking, equality matching, and market pricing (A. P. Fiske, 1991, 1992; see also Haslam, 2004). The acceptability of a particular trade-off depends on the relational system invoked.

In communal sharing relationships, group members (e.g., a family) – but not outsiders – have equal status and expect equal access to shared resources. Authority-ranking relationships include asymmetry among group members, such that lower-ranking members are expected to show deference to higher-ranking members. Equality-matching relationships are characterized by efforts to balance outcomes across people based on comparisons along one dimension at a time (e.g., turn taking). In market-pricing relationships, people strive to aggregate several dimensions of comparison (e.g., time spent, effort exerted, and output quality) using a common metric (usually money) that makes complex evaluations and exchanges possible. These four basic relational schemata facilitate social interaction by allowing people to form expectations for their own and others' behavior, evaluate exchanges, and identify violations.

From this perspective, taboo trade-offs occur when relational schemata conflict, such as when a market-pricing perspective comes into conflict with communal sharing (e.g., someone offers to pay for the part of the Thanksgiving dinner they ate at a friend's house; A. P. Fiske & Tetlock, 1997). A relational regulation model provides a promising approach to understanding moral flexibility because it offers a framework for understanding when and why moral rules and motives vary across situations and people (e.g., when people care about equality, when they care about equity, and whether they "keep score" at all). So, trade-offs that are common and uncontroversial when viewed through the lens of one relational model can appear bizarre, unacceptable, or offensive when viewed through the lens of another. For example, people routinely set prices based on supply and demand without concern when they perceive the market-pricing schema to apply to the situation. When it comes to fundamental needs – like being able to clear one's driveway after a snowstorm – people often apply the communal-sharing model, which causes them to judge people who set very high prices (e.g., for snow shovels) based on low supply and high demand as "price gougers" (Kahneman, Knetsch, & Thaler, 1986).

In sum, the sacred-values framework links moral values (as internalized norms that afford certain goods moral significance) to deontology in some contexts; utilitarian considerations are off-limits when contemplating a sacred-for-secular exchange. The sentiment is that some goods or services are not exchangeable for money, no matter what favorable consequences might be brought about by the exchange.
The framework also identifies situations where consequentialist cognition is permissible (as in tragic trade-offs, like the organ-transplant example, and in secular–secular trade-offs, like purchasing a computer). It further shows that, even in sacred–secular cases, it is possible to attenuate the outrage and contempt associated with taboo trade-offs through rhetorical manipulations that change the social-relational context of the trade-off.

Rules, reason, and emotion in moral trade-offs

A large amount of research into moral trade-offs has investigated reactions to the trolley problem as a means to examine the contributions of emotion, reason, automaticity, and cognitive control in moral judgment (Foot, 1967; Thomson, 1985; see also Waldmann, Nagel, & Wiegmann, 2012). In the "bystander" version, a runaway trolley is on a path that will kill five workers on the track ahead, and study participants must decide whether to flip a switch that would divert the trolley onto a sidetrack where it would kill only one person. In the "footbridge" case, five people are similarly threatened, but study participants must decide whether to throw a fat person in front of the trolley (killing him) to save the five on the tracks. A large majority of people say that the one person should be sacrificed for the sake of five in the bystander case, whereas only a small minority say that the one person should be sacrificed in the footbridge case (Greene, Sommerville, Nystrom, Darley, & Cohen, 2001; Hauser, Cushman, Young, Kang-xing Jin, & Mikhail, 2007; Mikhail, 2009). If people were strictly following a broad deontological rule, such as "It is absolutely forbidden to intentionally kill someone," they would respond "No" in both cases. If people were strictly following utilitarianism (i.e., bring about the greatest good for the greatest number), they would respond "Yes" in both cases. Therefore, the results show that most people are not rigidly deontological or utilitarian, and researchers have sought to explain what accounts for the flexibility people exhibit when dealing with these cases.

Several explanations have been offered for what underlies discrepant responses across versions of the trolley problem. For example, some have argued that the issue is whether an action directly (as in footbridge) or indirectly (as in bystander) causes harm (Royzman & Baron, 2002), whether the causal focus is directed onto the trolley or the people on the track (Waldmann & Dieterich, 2007; Iliev, Sachdeva, & Medin, 2012), whether the action is interpreted as violating a rule in the social contract (Fiddick, Spampinato, & Grafman, 2005), or whether the outcomes are viewed as gains or losses (Petrinovich, O'Neill, & Jorgensen, 1993). We now provide expanded descriptions of three interpretations that have received considerable attention. One posits opposing roles for affect-laden intuition versus reflective thought (Greene, 2007). Another decomposes these scenarios into their causal structure and invokes concepts like "assault," "battery," and "homicide" to account for judgments (Mikhail, 2007). A third invokes affectively tagged moral rules and consideration of consequences (Nichols & Mallon, 2006).

Dual-process morality. One influential model for understanding people's responses to sacrificial dilemmas like the trolley problem is a dual-system theory that contrasts reflective processing with emotional, intuitive processing (Greene, 2007; Greene et al., 2001).
According to the model, controlled cognitive processes are responsible for welfare-maximizing (i.e., utilitarian) choices, such as flipping the switch and pushing the man in the bystander and footbridge versions of the trolley problem. Automatic emotional processes are responsible for choices that correspond to deontological rules, such as the aversion to causing direct harm to the man on the footbridge. So, this view maintains that although deontological philosophy depends on explicit, conscious rules, many deontological-seeming, nonutilitarian judgments depend on automatic, emotional intuitions. Central to the theory is a distinction between personal moral dilemmas that involve a strong affective component (such as the footbridge version of the trolley problem, where it is necessary to actually push a person to their death) and impersonal dilemmas that do not involve such an affective component (such as the bystander version, where you are merely required to flip a switch that redirects the train). Personal dilemmas are proposed to evoke a conflict between utilitarian and deontological considerations, while impersonal dilemmas do not.

Several sources of evidence support this dual-process hypothesis. Some studies underscore the role of controlled cognition: functional magnetic resonance imaging reveals correlates of controlled cognition for utilitarian choices (Cushman, Murray, Gordon-McKeon, Wharton, & Greene, 2011; Greene, Nystrom, Engell, Darley, & Cohen, 2004), and time pressure and cognitive load decrease the frequency and speed of utilitarian choices (Suter & Hertwig, 2011; Trémolière, De Neys, & Bonnefon, 2012). Others underscore the role of affect: brain damage to regions that process emotions increases utilitarian responding (Ciaramelli, Muccioli, Ladavas, & di Pellegrino, 2007; Koenigs et al., 2007; Moretto, Làdavas, Mattioli, & di Pellegrino, 2010), and people who exhibit low levels of affective concern for others make more utilitarian judgments (Bartels & Pizarro, 2011; Gleichgerrcht & Young, 2013). Moral judgments are remarkably malleable: for instance, pharmacological interventions that enhance aversive learning and inhibition promote deontological responses (Crockett, Clark, Hauser, & Robbins, 2010), as do manipulations that encourage participants to imagine the harmful consequences of action in vivid detail (Amit & Greene, 2012; Bartels, 2008). Additionally, people with higher working-memory capacity and those who are more deliberative thinkers are more likely to judge a harmful utilitarian action as permissible (Bartels, 2008; Feltz & Cokely, 2008; Moore, Clark, & Kane, 2008; although, notably, people prone to reflective thought also tend to judge that it is permissible not to act according to utilitarian precepts, a pattern of normative indifference referred to as "moral minimalism"; Royzman, Landy, & Leeman, 2014).

At the same time, evidence for a dual-process model of morality has been challenged on empirical and methodological grounds (Baron, Gürçay, Moore, & Starcke, 2012; Kahane et al., 2012; McGuire, Langdon, Coltheart, & Mackenzie, 2009) and also on conceptual grounds (Kvaran & Sanfey, 2010; Moll, De Oliveira-Souza, & Zahn, 2008; Nucci & Gingo, 2011). A common theme among these critiques is that, ultimately, a sharp division between "cognitive" and "emotional" systems cannot be maintained.
Utilitarian judgments require some kind of motivational grounding, while characteristically deontological judgments require some kind of information processing. In fact, this point is echoed even by proponents of a dual-process approach (e.g., Cushman, Young, & Greene, 2010).

Recently, an alternative dual-process model has been proposed that draws on current neurobiological models of reinforcement learning (Crockett, 2013; Cushman, 2013). These models rely on a broad division between two algorithms for learning and choice. One algorithm assigns values to actions intrinsically, based on past experience (e.g., "I just don't feel right pushing a person," because of having been scolded for pushing on the playground years ago), and provides an explanation for an intrinsic aversion to harmful actions performed in ways that are more "typical," such as up-close and personal harms. The other algorithm derives the value of actions from an internally represented causal model of their expected outcomes (e.g., "If I flip this switch, it will send a train down the sidetrack and will kill the person standing there") and provides an explanation for utilitarian moral judgments. These models of reinforcement learning were developed quite independently of the literature on moral judgment, but they revolve around a distinction between good versus bad actions and good versus bad outcomes that resonates deeply with dual-process models of morality (R. Miller, Hannikainen, & Cushman, 2014).
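To make the contrast between these two valuation algorithms concrete, the short Python sketch below caches a fixed value per action to stand in for the experience-based ("model-free") algorithm and scores simulated outcomes from a small causal model to stand in for the outcome-based ("model-based") algorithm. This is a toy illustration only, not a model taken from the chapter or from Crockett (2013) or Cushman (2013); the action names, the numbers, and the weighted blend of the two signals are hypothetical choices made for exposition.

    # Toy contrast between the two valuation algorithms described above.
    # All names and numbers are hypothetical illustrations.

    # Experience-based ("model-free") values: cached evaluations of the actions
    # themselves, learned from past experience, with no reference to the
    # outcomes the actions would produce in the current situation.
    model_free_value = {
        "push_man": -10.0,   # strong learned aversion to direct, personal harm
        "do_nothing": 0.0,
    }

    # Outcome-based ("model-based") valuation: actions are evaluated by
    # simulating their consequences in an internal causal model.
    causal_model = {
        "push_man": {"deaths": 1},    # one person dies, five are saved
        "do_nothing": {"deaths": 5},  # the trolley kills the five workers
    }

    def model_based_value(action: str) -> float:
        """Score an action by the outcome its causal model predicts."""
        return -1.0 * causal_model[action]["deaths"]

    def choose(weight_mb: float) -> str:
        """Pick the action with the best weighted blend of the two signals.

        A weight near 1.0 mimics outcome-focused ("utilitarian") judgment;
        a weight near 0.0 mimics action-focused, deontological-seeming judgment.
        """
        def blended(action: str) -> float:
            return (weight_mb * model_based_value(action)
                    + (1 - weight_mb) * model_free_value[action])
        return max(model_free_value, key=blended)

    print(choose(weight_mb=0.9))  # push_man: simulated outcomes dominate
    print(choose(weight_mb=0.1))  # do_nothing: learned action values dominate

The fixed weight is only a stand-in for however the two systems are arbitrated; the published accounts specify learning rules (e.g., value updates driven by prediction errors) rather than hand-set cached values.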
Moral grammar. The dual-process model of moral judgment relies largely on a coarse distinction between more "personal" and "impersonal" harms, but a wealth of evidence indicates much greater subtlety and organization in moral intuitions. For instance, people condemn actively causing harm more than they do passively allowing harm (Baron & Ritov, 2009; Cushman, Young, & Hauser, 2006; Spranca, Minsk, & Baron, 1991), and they are also more willing to make trade-offs (e.g., sacrificing one life to save another) in cases that involve passive rather than active harm (Goodwin & Landy, 2014). People more strongly condemn actions that involve the direct transfer of bodily force from the actor to the victim (e.g., pushing the man off the footbridge) than those in which no such transfer of "personal force" occurs (e.g., flipping the switch in the bystander version; Cushman et al., 2006; Greene et al., 2009). And the bystander and footbridge versions also differ in whether the choice involves an action that causes harm to someone as a means to save others or as a side effect of saving others (Cushman et al., 2006; Foot, 1967; Mikhail, 2000; Thomson, 1985). In the footbridge version, the potential victim would be used as a "trolley-stopper," an instrument to accomplish the goal of saving five others. In the bystander case, however, the potential victim would be collateral damage; his death would be an unfortunate consequence of diverting the train, but saving the five people on the other track would not be a consequence of his death. The theory of universal moral grammar provides a descriptive account of these detailed principles (Mikhail, 2009).

It maintains that moral judgment is the product of a single, relatively discrete psychological system (i.e., one dedicated to morality) that distills situations into their causal and intentional structure and makes use of rules and legal concepts such as battery, assault, and homicide to interpret important features of situations and produce morally valenced judgments (for another interesting exploration of judgment processes involving legal concepts, see Chapter 26 of this handbook). This system is postulated to be innate and to operate below the level of conscious awareness (cf. Chomsky, 1957, 1965), and whereas some of its rules specify the relative moral value of different outcomes (e.g., human life is good, causing harm is bad), others specify moral constraints on actions that bring about those outcomes (e.g., intentionally killing someone is bad; allowing people to die is bad if an actor could save them without incurring unreasonable costs). Compared with the dual-process model of moral judgment, universal moral grammar has the virtue of explaining many detailed patterns in judgment that have been repeatedly identified in the literature, and it also provides an appraisal theory for what kinds of actions are likely to trigger specifically moral judgment. On the other hand, universal moral grammar is less suited to explaining dissociations between the apparent contributions to moral judgment of automatic versus controlled processes.

Rules and emotions: A potential reconciliation. An alternative model that may reconcile seemingly disparate approaches to understanding affect and cognition in moral judgment contends that moral cognition depends on an "affect-backed normative theory" (Nichols, 2002; Nichols & Mallon, 2006). Under ordinary circumstances, an act is judged wrong only if it both violates a proscriptive moral rule (e.g., "don't kill") and elicits an affective response. For example, people consider it worse to violate a rule of etiquette that evokes disgust (e.g., spitting in one's soup and then eating it) than to violate a rule of etiquette that does not (e.g., playing with one's food; Nichols, 2002). However, in a scenario where a little girl has been instructed by her mother not to break a teacup, people consider her decision to break a teacup to prevent five others from being broken a violation of a rule but not a moral violation, because a broken teacup does not elicit a strong emotional response (Nichols & Mallon, 2006).

While this theory offers a promising framework, there is substantial evidence that the framework must be a flexible one. Nichols and Mallon found that even affect-backed moral rules could be overwhelmed by good or bad consequences of great magnitude. For example, when told that billions of people would die from a virus released into the atmosphere unless a man is killed, 68% of participants judged that such an action violates a moral rule. However, only 24% judged that the action was morally wrong, all things considered.
Adding further detail to the dimensions of flexibility, Bartels (2008) found that people's moral decisions depended on (a) the moral relevance ascribed to choices (i.e., whether or not they endorsed proscriptive rules for actions), (b) evaluative focus (whether their attention was directed to rule-violating actions versus the positive consequences these actions produced), and (c) processing style (whether people were likely or unlikely to incorporate affective reactions to rule violations into their moral judgments of right and wrong) – people who were more likely to "trust the gut" made more nonutilitarian judgments than people who were more skeptical of their own intuition. Consistent with the framework set out by Nichols and Mallon, the results demonstrated that moral rules play an important but context-sensitive role in moral cognition (see also Broeders, van den Bos, Muller, & Ham, 2011). When violations of moral proscriptions were egregious, they generated affective reactions that overwhelmed consideration of the consequences favoring their violation. When attention was directed to the positive consequences of such actions, however, people reluctantly ignored these proscriptions.

Moral dilemmas and moral flexibility. Although there is an active debate among the proponents of competing theories of moral dilemmas, there is also an underlying agreement concerning their significance: moral dilemmas exist because we have diverse psychological processes available for making moral judgments, and when two or more processes give divergent answers to the same problem, the result is that we feel "of two minds" (Cushman & Greene, 2012; Sinnott-Armstrong, 2008d; Sloman, 1996). In a sense, this both compels and enables moral flexibility. It compels flexibility because certain circumstances require uncomfortable trade-offs between competing moral values. But it also enables moral flexibility because it leaves people with an array of potential bases for a favored judgment. It has been suggested that consequentialist reasoning is particularly susceptible to motivated moral reasoning: when people are opposed to an action, they may selectively focus on its potential negative consequences and disregard its potential positive consequences (Ditto & Liu, 2011).

Judgments of moral blame and punishment

Research into moral trade-offs like the trolley problem revolves around tragically difficult choices that are as fantastical as they are gripping. The role of moral flexibility in situations like these is clear because of the inherent moral conflict that dilemmas engender. But a much more common feature of everyday life is the task of assigning responsibility, blame, and punishment to those around us for more minor infractions: a fender-bender, a broken promise, or a rude comment, for instance. In this section we describe the basic processes that translate our perception of an event ("The petty cash drawer was empty") into an assignment of responsibility ("Jim stole the money") and then into a punishment ("Jim should be fired"). Although one might suppose that such apparently simple judgments would leave little room for moral flexibility, in fact we find that even the process of blame attribution is rife with conflict and compromise between competing moral principles.
Building on normative philosophical and legal traditions and on classic work on the psychology of attribution (e.g., Heider, 1958; Kelley, 1973), psychologists have outlined the specific conditions necessary for arriving at a judgment that someone is blameworthy (e.g., Alicke, 2000; Malle, Guglielmo, & Monroe, 2012; Shaver, 1985; Weiner, 1995). When presented with a possible moral infraction, perceivers ask themselves a series of questions about various features of the act, such as whether actors intended the outcome, had control over the outcome, and could foresee the results of their actions. Although the details differ between models, they all share some core features. Perceivers (a) assess whether there is a causally responsible agent, (b) evaluate whether that agent intentionally caused the harm, and (c) assign moral responsibility, blame, and punishment. These steps are typically posited to occur in that temporal order. If all these conditions are met, perceivers conclude that the actor should be held responsible and blamed (or praised, for positive actions); however, several studies have identified instances in which this attribution sequence "breaks." By disrupting the ordinary causal relationship between an actor and a victim, researchers hope to further refine our understanding of how people decide whether a person is to blame and how much punishment (if any) they deserve.

Moral luck. Perhaps the simplest way to break the attribution sequence is with an accidental harm ("I thought I was putting sugar in your coffee, but it was rat poison!") or an attempted harm ("I thought I was putting rat poison in your coffee, but it was sugar!"). In such cases, moral judgments are largely determined by a person's intentions (Young, Cushman, Hauser, & Saxe, 2007; Young, Nichols, & Saxe, 2010). Along similar lines, people tend to discount moral blame when an agent does not act with control over their behavior (Shaver, 1985). For instance, relatives of people suffering from schizophrenia attenuate blame for actions that were undertaken as a direct result of the person's (uncontrollable) hallucinations and delusions (Provencher & Fincham, 2000). Also, people are more likely to assign blame to AIDS patients if they contracted the disease through controllable means (licentious sexual practices) than through uncontrollable ones (receiving a tainted blood transfusion; Weiner, 1995).

There are some cases, however, in which accidental outcomes can make a surprising difference in our moral judgments. Consider, for example, two drunk drivers who were involved in accidents. One falls asleep, veers off the road, and strikes a tree, but the other falls asleep, veers off the road, and kills a pedestrian. The driver who kills the pedestrian faces a much stiffer punishment than the one who strikes the tree, a phenomenon known as "moral luck" in philosophy and law (Hall, 1947; Hart & Honore, 1959; McLaughlin, 1925). In addition, many studies show moral-luck effects in people's intuitive judgments (Berg-Cross, 1975; Cushman, 2008; Cushman, Dreber, Wang, & Costa, 2009). According to one account of the phenomenon, intent-based moral judgment and outcome-based moral judgment operate in competition (Cushman, 2008). A competitive interaction between these two types of judgments may explain why people feel caught in a dilemma in cases of moral luck. On the one hand, it seems wrong to treat the two drunk drivers differently given their identical behavior.
On the other hand, it seems even more wrong to send one to prison for driving under the influence of alcohol, or to let the other off with a ticket for killing a girl. In other words, intent to harm and causal responsibility for harm may not be fused into a single process of blame assignment but, rather, exert independent influences on different categories of moral judgment (see also Buckholtz et al., 2008).

Causal deviance. Another example of when the standard attribution sequence breaks down comes from the philosophical literature on cases of "causally deviant" actions (Searle, 1983). While traditional theories of responsibility specify that an agent should receive blame if she intended and caused an action (e.g., murder), people tend to reduce blame for an outcome if the intention and the cause are not linked tightly. Take this example, adapted from Chisholm (1966):

    Joe wants to kill his rich uncle, as he stands to inherit a large sum of money. He formulates his plan to murder his uncle and begins the drive to his uncle's home. Excited at the prospect of soon acquiring a lot of money, Joe is a bit careless at the wheel and hits and kills a pedestrian. This pedestrian turns out to have been his uncle.

According to the standard descriptive theories, people should ascribe full blame to Joe for the death of his uncle, as he intended the outcome and was its sole cause. Yet the "deviant" link between Joe's intentions and the outcome causes people to find the actions less blameworthy. For instance, in one study researchers provided participants with a description of a woman who successfully murdered her husband by poisoning his favorite dish at a restaurant. Some participants received a modified version of this scenario in which the husband's death only came about because the poison made the dish taste bad, and the husband ordered a new dish to which he was deathly allergic. Across a variety of these cases, participants reduced blame (as well as praise, for positive actions) in the scenarios that included a "deviant" causal chain (Pizarro, Uhlmann, & Bloom, 2003), demonstrating that even in cases where an act is intended and caused, disrupting the causal chain – even if the intentions remain – reduces the willingness of individuals to assign responsibility.

Backwards inferences. Ordinarily, attribution-based accounts of moral responsibility, blame, and punishment assume that people begin with a fixed representation of an event – what a person intended, how they acted, and what harm they caused – and then proceed to make a moral evaluation based on this representation. But several revealing cases show influences that work in the opposite direction – whether or not a person acted in a way that is morally wrong can influence perceptions of the actor's intentions and causal responsibility. Consider, for example, a man speeding home in a rainstorm who gets into an accident and injures others. People are more likely to judge that the man had control over the car if he was speeding home to hide cocaine from his parents than if he was speeding home to hide an anniversary gift for his wife, irrespective of the fact that the factors that led to the accident were identical across both scenarios (Alicke, 1992). According to Alicke, our desire to blame the nefarious "cocaine driver" leads us to distort the criteria of controllability in a fashion that validates this blame. Researchers have also demonstrated similar asymmetries in judgments of intentionality.
People are more inclined to say that a consequence was produced intentionally when they regard that consequence as morally wrong than when they regard it as morally right (see Knobe, 2006, for a review). Suppose that the CEO of a company is told that implementing a new policy will have the side effect of either harming or helping the environment. In both cases, the CEO explicitly states that he only cares about increasing profits, not about the incidental side effect of harming or helping the environment. Nonetheless, study participants assert that the CEO who harmed the environment did so intentionally, whereas the one who helped the environment did not (Knobe, 2003). Although the mechanisms underlying this effect and the conditions under which it occurs are open questions (see, e.g., Guglielmo & Malle, 2011; Sloman, Fernbach, & Ewing, 2012; Uttich & Lombrozo, 2010), the basic phenomenon is easily replicable and can be observed in children (using simpler scenarios) as young as 6 and 7 years old (Leslie, Knobe, & Cohen, 2006).

Blame and moral flexibility. Many everyday cases of moral violations fit a standard sequence in which one person intentionally causes harm to another, and this sequence appears to be reflected in our ordinary attribution processes. However, the research reviewed in this section is suggestive of moral flexibility. Moral responsibility, blame, and punishment vary across contexts, which suggests that more nuanced influences are at play in these judgments. In addition to the studies reviewed above, a grab bag of other factors exacerbates or moderates judgments of deserved punishment, including but not limited to the induction of incidental anger (i.e., anger about something unrelated to the focal event; Lerner, Goldberg, & Tetlock, 1998); whether the actor apologizes to the victim, acknowledges guilt, and/or is granted forgiveness by the victim (Robinson, Jackowitz, & Bartels, 2012); whether the action is described as happening in the past or in the future (Burns, Caruso, & Bartels, 2012); and cultural differences in the ways that perceived intentions, actions, and outcomes relate to judgments of responsibility and morality (Cohen & Rozin, 2001; J. G. Miller & Bersoff, 1992). Although no single model explains all of the phenomena mentioned above, we can gain some understanding by (a) examining dissociations between the psychological processes that determine the culpability of acts based on intentions versus those that determine culpability based on outcomes, and (b) further probing the psychological processes that cause assessments of causation, control, and intent to be reciprocally influenced by the moral status of the action that a person performs.

Summary of evidence for moral flexibility

If there is one undisputed fact about the human capacity for moral judgment, it is that the capacity itself comprises a diverse array of distinct psychological processes. These processes often operate in competition, and the moral dilemmas that result provide fertile ground for the work of philosophers and psychologists alike. They also give people the challenge of reconciling diverse moral concerns and the opportunity to selectively recruit moral principles to support a favored judgment. This tremendous complexity gives rise to methodological and theoretical challenges for research in moral psychology. Next, we sketch some of these challenges and suggest a few promising paths forward for the field.
Methodological Desiderata

Exercise caution when making comparisons to normative standards

One research strategy used in judgment and decision making research is to (a) identify a normative model, (b) demonstrate ways that people's responses systematically diverge from the predictions of the normative model, and (c) treat these "errors" as diagnostic of mental processes (e.g., Kahneman, 2000; Shafir & LeBoeuf, 2002). Sunstein (2005) uses this method to identify what he calls "moral heuristics." While adhering to this error-and-bias approach makes sense across many domains of choice where there is widespread agreement about the normative standard (such as probabilistic judgments), it is unclear whether the approach is appropriate for ethical judgment, given how little agreement there is among experts or lay judges about the "right" answer. For example, a survey of 73 professors with a PhD in philosophy and a primary area of specialization in ethics revealed that 37% endorse deontological principles, 27% endorse utilitarian principles, 22% endorse virtue ethics, and 14% endorse none of the above (personal communication: reanalysis of data reported in Schwitzgebel & Cushman, 2012). In short, comparison to normative standards is more problematic for moral than for nonmoral judgments and decisions owing to the lack of consensus (or even a majority opinion) about which normative theory provides the right answer about how to act across situations.

Nonetheless, the error-and-bias approach to moral judgment is common in the literature. Several prominent researchers in the field adopt utilitarianism as the normative standard of comparison for ethical judgment and categorize deontological judgments as heuristics that can give rise to "errors" when they conflict with the greater good. Sunstein (2005), for example, adopts this approach and argues that deontological "rules of thumb" give rise to "mistaken and even absurd moral judgments" (p. 531). For Sunstein, the implication is that we should be wary of deontological intuitions because they are likely to be "unreliable" and "unsound," and that these intuitions ought to be deprecated when making decisions about law and public policy (for critiques of this approach, see Bauman & Skitka, 2009a; Mikhail, 2005; Pizarro & Uhlmann, 2005; Tetlock, 2005).

In addition to the problem of disagreement over normative standards, the standard manner of assessing moral judgments (i.e., through the use of sparsely described moral trade-off scenarios that pit a utilitarian option against a deontological option) might not identify the psychological mechanisms that give rise to the response. In the footbridge version of the trolley problem, for example, participants may choose to push the fat man because (a) they care so much about saving lives that they are reluctantly willing to do what would otherwise be a horrible thing, or (b) killing someone is not as aversive to them as it is to others. Pushing the man off the footbridge is the "optimal" response for a researcher adopting utilitarianism as the normative standard, but simply recording participants' choices offers no way to distinguish "real" utilitarians from potential psychopaths. In fact, some evidence suggests that these dilemmas are likely capturing the latter.
People who score higher in antisocial personality traits, including psychopathy, Machiavellianism, and the perception that life is meaningless, are more likely to push the fat man and provide seemingly "utilitarian" responses in other similar dilemmas (Bartels & Pizarro, 2011).

Although identical responses to these dilemmas may reflect very different sets of intuitions, some have argued that no intuitions generated by these artificial problems are trustworthy. For example, Hare writes:

    Undoubtedly, critics of utilitarianism will go on trying to produce examples which are both fleshed out and reasonably likely to occur, and also support their argument. I am prepared to bet, however, that the nearer they get to realism and specificity, and the further from playing trains – a sport which has had such a fascination for them – the more likely the audience is, on reflection, to accept the utilitarian solution. (1981, p. 139)

These intuitions may also be a result of features that even respondents would agree are morally irrelevant (e.g., Ditto, Pizarro, & Tannenbaum, 2009; Uhlmann, Pizarro, Tannenbaum, & Ditto, 2009) and as such might not even derive from the application of a moral principle. In short, there are good reasons to question some widespread methodological practices that are used in the study of moral judgment: the adoption of a normative standard to assess "errors" of moral judgment, and the reliance on sacrificial moral dilemmas. An overreliance on these methods may prevent us from uncovering the subtlety and complexity of our everyday moral psychology.
